Deterministic Sparse Pattern Matching via the Baur-Strassen Theorem
How fast can you test whether a constellation of stars appears in the night
sky? This question can be modeled as the computational problem of testing
whether a set of points can be moved into (or close to) another set
under some prescribed group of transformations.
Consider, as a simple representative, the following problem: Given two sets
A, B of at most n integers, determine whether there is some
shift c such that A shifted by c is a subset of B, i.e.,
A + c ⊆ B. This problem, to which we refer as the
Constellation problem, can be solved in near-linear time O(n log n) by a
Monte Carlo randomized algorithm [Cardoze, Schulman; FOCS'98] and in time O(n log^2 n)
by a Las Vegas randomized algorithm [Cole, Hariharan; STOC'02].
Moreover, there is a deterministic algorithm running in time
n · 2^{O(√(log n log log n))} [Chan, Lewenstein; STOC'15]. An
interesting question left open by these previous works is whether Constellation
is in deterministic near-linear time (i.e., with only polylogarithmic
overhead).
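For intuition, a brute-force quadratic-time solution to Constellation is simple (a sketch for illustration only; the function name is ours, and the cited near-linear algorithms are far more involved):

```python
def constellation_brute_force(A, B):
    """Return a shift c with A + c a subset of B, or None if none exists.

    Any valid shift must map min(A) to some element of B, so at most |B|
    candidate shifts need checking -- O(|A| * |B|) overall.
    """
    B_set = set(B)
    a0 = min(A)
    for b in B_set:
        c = b - a0
        if all(a + c in B_set for a in A):
            return c
    return None

# Example: shifting {1, 3, 4} by 5 gives {6, 8, 9}, a subset of B.
print(constellation_brute_force([1, 3, 4], [6, 8, 9, 11]))  # -> 5
```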
We answer this question positively by giving an n · polylog(n)-time
deterministic algorithm for the Constellation problem. Our algorithm extends to
various more complex Point Pattern Matching problems in higher dimensions,
under translations and rigid motions, and possibly with mismatches, and also to
a near-linear-time derandomization of the Sparse Wildcard Matching problem on
strings.
We find it particularly interesting how we obtain our deterministic
algorithm. All previous algorithms are based on the same baseline idea, using
additive hashing and the Fast Fourier Transform. In contrast, our algorithms
are based on new ideas, involving a surprising blend of combinatorial and
algebraic techniques. At the heart lies an innovative application of the
Baur-Strassen theorem from algebraic complexity theory.
Comment: Abstract shortened to fit arXiv requirement
Algorithms for sparse convolution and sublinear edit distance
In this PhD thesis on fine-grained algorithm design and complexity, we investigate output-sensitive and sublinear-time algorithms for two important problems.
(1) Sparse Convolution: Computing the convolution of two vectors is a basic algorithmic primitive with applications across all of Computer Science and Engineering. In the sparse convolution problem we assume that the input and output vectors have at most t nonzero entries, and the goal is to design algorithms with running times dependent on t. For the special case where all entries are nonnegative, which is particularly important for algorithm design, it has been known for twenty years that sparse convolutions can be computed in near-linear randomized time O(t log^2 n). In this thesis we develop a randomized algorithm with running time O(t log t), which is optimal (under some mild assumptions), and the first near-linear deterministic algorithm for sparse nonnegative convolution. We also present an application of these results, leading to seemingly unrelated fine-grained lower bounds against distance oracles in graphs.
(2) Sublinear Edit Distance: The edit distance of two strings is a well-studied similarity measure with numerous applications in computational biology. While computing the edit distance exactly provably requires quadratic time, a long line of research has led to a constant-factor approximation algorithm in almost-linear time. Perhaps surprisingly, it is also possible to approximate the edit distance k within a large factor O(k) in sublinear time O~(n/k + poly(k)). We drastically improve the approximation factor of the known sublinear algorithms from O(k) to k^{o(1)} while preserving the O~(n/k + poly(k)) running time.
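The sparse convolution problem itself can be specified with a short dictionary-based sketch (quadratic in the number of nonzeros t, purely illustrative; the thesis's algorithms are near-linear in t):

```python
from collections import defaultdict

def sparse_convolution(f, g):
    """Convolve two sparse vectors given as {index: value} dicts.

    Runs in O(t^2) for t nonzeros -- useful only as a specification
    of the problem, not as an efficient algorithm.
    """
    h = defaultdict(int)
    for i, fi in f.items():
        for j, gj in g.items():
            h[i + j] += fi * gj
    return dict(h)

# (1 + x^5) * (1 + x^5) = 1 + 2x^5 + x^10: the output has only 3
# nonzeros, far fewer than the dense length would suggest.
print(sparse_convolution({0: 1, 5: 1}, {0: 1, 5: 1}))  # -> {0: 1, 5: 2, 10: 1}
```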
Negative-Weight Single-Source Shortest Paths in Near-Linear Time: Now Faster!
In this work we revisit the fundamental Single-Source Shortest Paths (SSSP)
problem with possibly negative edge weights. A recent breakthrough result by
Bernstein, Nanongkai and Wulff-Nilsen established a near-linear
O(m log^8(n) log W)-time algorithm for negative-weight SSSP, where W is an upper bound
on the magnitude of the smallest negative-weight edge. In this work we improve
the running time to O(m log^2(n) log(nW) log log n), which is an
improvement by nearly six log-factors. Some of these log-factors are easy to
shave (e.g. replacing the priority queue used in Dijkstra's algorithm), while
others are significantly more involved (e.g. to find negative cycles we design
an algorithm reminiscent of noisy binary search and analyze it with drift
analysis).
As side results, we obtain an algorithm to compute the minimum cycle mean in
the same running time as well as a new construction for computing Low-Diameter
Decompositions in directed graphs.
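For context, the nonnegative-weight subroutine whose priority queue the abstract alludes to is standard Dijkstra, sketched below (illustrative only; the paper's contribution lies in the scaling framework built around such routines):

```python
import heapq

def dijkstra(graph, source):
    """Standard Dijkstra for nonnegative edge weights.

    graph: {u: [(v, weight), ...]}. Returns {vertex: distance}.
    The binary heap here is the priority queue whose overhead the
    abstract mentions shaving.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {0: [(1, 4), (2, 1)], 1: [], 2: [(1, 2)]}
print(dijkstra(g, 0))  # -> {0: 0, 1: 3, 2: 1}
```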
Stronger 3-SUM Lower Bounds for Approximate Distance Oracles via Additive Combinatorics
The "short cycle removal" technique was recently introduced by Abboud,
Bringmann, Khoury and Zamir (STOC '22) to prove fine-grained hardness of
approximation. Its main technical result is that listing all triangles in an
n^{1/2}-regular graph is n^{2-o(1)}-hard under the 3-SUM conjecture even
when the number of short cycles is small; namely, when the number of k-cycles
is O(n^{k/2+γ}) for a small constant γ.
Abboud et al. achieve a nontrivial constant γ by applying structure vs. randomness
arguments on graphs. In this paper, we take a step back and apply conceptually
similar arguments on the numbers of the 3-SUM problem. Consequently, we achieve
the best possible γ = o(1) and the following lower bounds under the 3-SUM
conjecture:
* Approximate distance oracles: The seminal Thorup-Zwick distance oracles
achieve stretch 2k-1 after preprocessing a graph in O(k · m · n^{1/k})
time. For the same stretch, and assuming the query time is n^{o(1)}, Abboud et
al. proved a super-linear lower bound on the
preprocessing time; we improve it to m^{1+1/(2k)-o(1)}, which is only a
factor 2 away from the upper bound. We also obtain tight bounds for stretch
2 and 3 and higher lower bounds for dynamic shortest paths.
* Listing 4-cycles: Abboud et al. proved the first super-linear lower bound
for listing all 4-cycles in a graph, ruling out (m + t)^{1+o(1)}-time
algorithms where t is the number of 4-cycles. We settle the complexity of
this basic problem by showing that the O(min(m^{4/3}, n^2) + t)
upper bound is tight up to n^{o(1)} factors.
Our results exploit a rich tool set from additive combinatorics, most notably
the Balog-Szemerédi-Gowers theorem and Ruzsa's covering lemma. A key
ingredient that may be of independent interest is a subquadratic algorithm for
3-SUM if one of the sets has small doubling.
Comment: Abstract shortened to fit arXiv requirement
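The underlying 3-SUM problem is easy to state; a hashing-based quadratic baseline (shown here in the three-set formulation, as a sketch for illustration only) is the running time the 3-SUM conjecture asserts cannot be beaten by a polynomial factor:

```python
def three_sum(A, B, C):
    """Decide whether there exist a in A, b in B, c in C with a + b + c = 0.

    Hashing-based O(n^2) baseline: for each pair (a, b), look up the
    unique value that would complete a zero-sum triple.
    """
    C_set = set(C)
    for a in A:
        for b in B:
            if -(a + b) in C_set:
                return True
    return False

print(three_sum([1, 4], [2, 7], [-6, 10]))  # -> True, since 4 + 2 + (-6) = 0
```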
The Hardship That is Internet Deprivation and What it Means for Sentencing: Development of the Internet Sanction and Connectivity for Prisoners
Twenty years ago, the internet was a novel tool. Now it is such an ingrained part of most people’s lives that they experience and exhibit signs of anxiety and stress if they cannot access it. Non-accessibility to the internet can also tangibly set back people’s social, educational, financial, and vocational pursuits and interests. In this Article, we argue that sentencing law needs to be reformed to adapt to the fundamental changes in human behavior caused by the internet.
We present three novel and major implications for the sentencing law and practice in the era of the internet. First, we argue that denial of access to the internet should be developed as a discrete sentencing sanction, which can be invoked for relatively minor offenses in much the same way that deprivation of other entitlements or privileges, such as the right to drive a motor vehicle, are currently imposed for certain crimes.
Second, we argue that prisoners should have unfettered access to the internet. This would lessen the pain stemming from incarceration in a manner which does not undermine the principal objectives of imprisonment—community protection and infliction of a hardship—while at the same time providing prisoners with the opportunity to develop skills, knowledge, and relationships that will better equip them for a productive life once they are released. Previous arguments that have been made for denying internet access to prisoners are unsound. Technological advances can readily curb supposed risks associated with prisoners using the internet.
Finally, if the second recommendation is not adopted, and prisoners continue to be denied access to the internet, there should be an acknowledgement that the burden of imprisonment is greater than is currently acknowledged. The internet is now such an ingrained and important aspect of people’s lives that prohibiting its use is a cause of considerable unpleasantness. This leads to our third proposal: continued denial of the internet to prisoners should result in a recalibration of the pain of imprisonment such that a sentencing reduction should be conferred on prisoners.
Fine-Grained Completeness for Optimization in P
We initiate the study of fine-grained completeness theorems for exact and
approximate optimization in the polynomial-time regime. Inspired by the first
completeness results for decision problems in P (Gao, Impagliazzo, Kolokolova,
Williams, TALG 2019) as well as the classic class MaxSNP and
MaxSNP-completeness for NP optimization problems (Papadimitriou, Yannakakis,
JCSS 1991), we define polynomial-time analogues MaxSP and MinSP, which contain
a number of natural optimization problems in P, including Maximum Inner
Product, general forms of nearest neighbor search and optimization variants of
the k-XOR problem. Specifically, we define MaxSP as the class of problems
definable as max_{x_1,...,x_k} #{(y_1,...,y_l) : φ(x_1,...,x_k,y_1,...,y_l)}, where φ is a quantifier-free
first-order property over a given relational structure (with MinSP defined
analogously). On m-sized structures, we can solve each such problem in time
O(m^{k+l-1}). Our results are:
- We determine (a sparse variant of) the Maximum/Minimum Inner Product
problem as complete under *deterministic* fine-grained reductions: A strongly
subquadratic algorithm for Maximum/Minimum Inner Product would beat the
baseline running time of O(m^{k+l-1}) for *all* problems in MaxSP/MinSP by
a polynomial factor.
- This completeness transfers to approximation: Maximum/Minimum Inner Product
is also complete in the sense that a strongly subquadratic c-approximation
would give a (c+ε)-approximation for all MaxSP/MinSP problems in
time O(m^{k+l-1-δ}) for some δ > 0, where ε > 0 can be chosen
arbitrarily small. Combining our completeness with (Chen, Williams, SODA 2019),
we obtain the perhaps surprising consequence that refuting the OV Hypothesis is
*equivalent* to giving an O(1)-approximation for all MinSP problems in
faster-than-O(m^{k+l-1}) time.
Comment: Full version of APPROX'21 paper, abstract shortened to fit arXiv requirement
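To make the complete problem concrete, here is the naive quadratic baseline for Maximum Inner Product over 0/1 vectors (a sketch for illustration; the completeness results concern beating exactly this kind of running time):

```python
from itertools import product

def max_inner_product(X, Y):
    """Naive O(n^2 * d) Maximum Inner Product over two sets of
    d-dimensional 0/1 vectors -- the baseline that a strongly
    subquadratic algorithm would have to beat.
    """
    return max(sum(x_i * y_i for x_i, y_i in zip(x, y))
               for x, y in product(X, Y))

X = [(1, 1, 0), (0, 1, 1)]
Y = [(1, 0, 1), (1, 1, 1)]
print(max_inner_product(X, Y))  # -> 2, attained e.g. by (1,1,0) . (1,1,1)
```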
Faster Minimization of Tardy Processing Time on a Single Machine
This paper is concerned with the 1||∑p_jU_j problem, the problem of minimizing the total processing time of tardy jobs on a single machine. This is not only a fundamental scheduling problem, but also a very important problem from a theoretical point of view, as it generalizes the Subset Sum problem and is closely related to the 0/1-Knapsack problem. The problem is well-known to be NP-hard, but only in a weak sense, meaning it admits pseudo-polynomial time algorithms. The fastest known pseudo-polynomial time algorithm for the problem is the famous Lawler and Moore algorithm, which runs in O(n · P) time, where P is the total processing time of all jobs in the input. This algorithm was developed in the late 60s and has yet to be improved to date. In this paper we develop two new algorithms for 1||∑p_jU_j, each improving on Lawler and Moore's algorithm in a different scenario. Both algorithms rely on basic primitive operations between sets of integers and vectors of integers for the speedup in their running times. The second algorithm relies on fast polynomial multiplication as its main engine, while for the first algorithm we define a new "skewed" version of (max,min)-convolution which is interesting in its own right.
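The flavor of the Lawler-Moore dynamic program can be sketched as follows (a minimal subset-sum-style version over jobs in earliest-due-date order; variable names are ours, and this is a sketch rather than the paper's faster algorithms):

```python
def min_tardy_processing_time(jobs):
    """Lawler-Moore-style DP. jobs: list of (processing_time, due_date).

    Process jobs in earliest-due-date order; 'feasible' collects every
    total processing time achievable by a set of jobs all finishing on
    time. O(n * P) states, where P is the total processing time.
    """
    P = sum(p for p, _ in jobs)
    feasible = {0}
    for p, d in sorted(jobs, key=lambda job: job[1]):
        feasible |= {t + p for t in feasible if t + p <= d}
    # Minimizing tardy processing time = maximizing on-time processing time.
    return P - max(feasible)

# Jobs (p, d): scheduling (3, 5) then (3, 6) keeps both on time,
# so only the job with processing time 2 is tardy.
jobs = [(2, 2), (3, 5), (3, 6)]
print(min_tardy_processing_time(jobs))  # -> 2
```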
Using the TIDieR checklist to describe health visitor support for mothers with mental health problems: analysis of a cross-sectional survey
At least half of the 20% of mothers who experience mental health problems (MHPs) during pregnancy or after birth are not receiving the help they need that will lead to recovery. In order to identify where improvements need to be made, it is necessary to describe exactly what is being done and the barriers and facilitators that compromise or enhance optimal care. The majority of mothers experience mild to moderate anxiety or depression. The expectation is that primary care professionals, such as health visitors (HVs), can provide the support they need that will lead to recovery. The aim of this study was to explore the views of HVs regarding the content and purpose of an intervention to support mothers with MHPs, described as ‘listening visits’ (LVs). A link to an online survey was offered to the members and champions of the Institute of Health Visiting (n=9,474) between March and May 2016. The survey was completed by 1,599 (17%) of the target population, of whom 85% were offering LVs. The Template for Intervention Description and Replication (TIDieR) checklist was used to provide a framework to describe commonalities and variations in practice. There appeared to be a shared understanding of the rationale for LVs, but a lack of agreement about what the intervention should be called, the techniques that should be used, and the duration, frequency and expected outcomes of the intervention. Contextual factors such as staff shortages; conflicting priorities; the needs and circumstances of mothers; the capability and motivation of HVs; inadequate training and supervision; and absence of clear guidance contributed to variations in perceptions and practice. There are many ways in which the HV contribution to the assessment and management of mothers with MHPs could be improved. The intervention delivered by HVs needs to be more clearly articulated. The contextual factors influencing competent and consistent practice also need to be addressed.